Composite Bayesian optimization
Constrained composite Bayesian optimization for rational synthesis of polymeric particles
Wang, Fanjin, Parhizkar, Maryam, Harker, Anthony, Edirisinghe, Mohan
Polymeric nano- and micro-scale particles play critical roles in tackling healthcare and energy challenges owing to their miniature characteristics. However, tailoring their synthesis to meet specific design targets has traditionally depended on domain expertise and costly trial and error. Recently, modeling strategies, particularly Bayesian optimization (BO), have been proposed to aid materials discovery for maximized/minimized properties. Motivated by practical demands, this study integrates, for the first time, constrained and composite Bayesian optimization (CCBO) to perform efficient target-value optimization under black-box feasibility constraints and the limited data typical of laboratory experimentation. On a synthetic problem simulating electrospraying, a model nanomanufacturing process, CCBO strategically avoided infeasible conditions and efficiently steered particle production towards predefined size targets, surpassing standard BO pipelines and making decisions comparable to those of human experts. Further laboratory experiments validated the capability of CCBO to guide the rational synthesis of poly(lactic-co-glycolic acid) (PLGA) particles with diameters of 300 nm and 3.0 $\mu$m via electrospraying. With minimal initial data and unknown experimental constraints, CCBO reached the design targets within 4 iterations. Overall, the CCBO approach presents a versatile and holistic optimization paradigm for next-generation target-driven particle synthesis empowered by artificial intelligence (AI).
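The core idea described in the abstract is a surrogate for the raw outcome (particle size), a known composite objective (closeness to the size target), and a separately modelled black-box feasibility constraint. The following is a minimal, hypothetical sketch of such a loop using scikit-learn; the electrospray simulator, parameter ranges, kernels, and candidate-pool acquisition are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of constrained composite Bayesian optimization (CCBO) for
# target-value particle-size optimization. All names and models are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor, GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
TARGET_NM = 300.0  # desired particle diameter (nm)

def run_experiment(x):
    """Stand-in for a real electrospray run: returns the measured diameter (nm)
    and whether a stable spray was obtained (the black-box constraint)."""
    voltage, flow_rate = x
    diameter = 50.0 + 400.0 * flow_rate - 30.0 * voltage + rng.normal(0.0, 5.0)
    feasible = voltage + 2.0 * flow_rate < 12.0
    return diameter, feasible

def composite_objective(diameter):
    # Composite structure: the surrogate models the raw outcome (diameter),
    # while the objective is a known function of it (closeness to the target).
    return -np.abs(diameter - TARGET_NM)

lower, upper = np.array([5.0, 0.1]), np.array([15.0, 2.0])  # voltage (kV), flow rate (mL/h)
X = rng.uniform(lower, upper, size=(6, 2))                  # small initial design
results = [run_experiment(x) for x in X]
y = np.array([d for d, _ in results])
c = np.array([int(f) for _, f in results])

for _ in range(10):
    outcome_gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)

    cand = rng.uniform(lower, upper, size=(2000, 2))        # random candidate pool
    if 0 < c.sum() < len(c):
        feas_gp = GaussianProcessClassifier(Matern(nu=2.5)).fit(X, c)
        p_feasible = feas_gp.predict_proba(cand)[:, 1]
    else:
        p_feasible = np.ones(len(cand))                     # no constraint signal yet

    # Monte-Carlo expected improvement of the composite objective,
    # weighted by the probability of feasibility (constrained acquisition).
    mu, sd = outcome_gp.predict(cand, return_std=True)
    samples = rng.normal(mu[None, :], sd[None, :], size=(256, len(cand)))
    best = composite_objective(y[c == 1]).max() if c.any() else composite_objective(y).max()
    improvement = np.clip(composite_objective(samples) - best, 0.0, None).mean(axis=0)
    acquisition = improvement * p_feasible

    x_next = cand[acquisition.argmax()]                     # next condition to try
    d_next, f_next = run_experiment(x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, d_next)
    c = np.append(c, int(f_next))
```

The key difference from a standard BO pipeline is that the Gaussian process never sees the objective directly, only the measured diameter; the target-distance transformation is applied to posterior samples, and the acquisition is down-weighted where the feasibility classifier predicts failure.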
Composite Bayesian Optimization In Function Spaces Using NEON -- Neural Epistemic Operator Networks
Guilhoto, Leonardo Ferreira, Perdikaris, Paris
High-dimensional problems are prominent across all corners of science and industrial applications. Within this realm, optimizing black-box functions and operators can be computationally expensive and require large amounts of hard-to-obtain data for training surrogate models. Uncertainty quantification becomes a key element in this setting, as the ability to quantify what a surrogate model does not know offers a guiding principle for new data acquisition. However, existing methods for surrogate modeling with built-in uncertainty quantification, such as Gaussian Processes (GPs) [1], have demonstrated difficulty in modeling problems that exist in high dimensions. While other methods such as Bayesian neural networks (BNNs) [2] and deep ensembles [3] are able to mitigate this issue, their computational cost can still be prohibitive for some applications. This problem becomes more prominent in Operator Learning, where either the inputs or the outputs of a model are functions residing in infinite-dimensional function spaces. The field of Operator Learning has seen many advances in recent years [4, 5, 6, 7, 8, 9], with applications across many domains in the natural sciences and engineering, but so far its integration with uncertainty quantification is limited [10, 11]. Beyond safety-critical deep-learning applications such as those in medicine [12, 13] and autonomous driving [14], the generation of uncertainty measures can also be important for decision making when collecting new data in the physical sciences. Total uncertainty is often made up of two distinct parts: epistemic and aleatoric uncertainty.
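To illustrate the epistemic/aleatoric decomposition the abstract ends on, the sketch below trains a small deep ensemble (in the spirit of [3], not the NEON operator networks themselves) in PyTorch. Each member predicts a Gaussian mean and variance, so disagreement between members estimates epistemic uncertainty while the averaged predicted variance estimates aleatoric uncertainty; the architecture, data, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: deep ensemble separating epistemic and aleatoric uncertainty.
import torch
import torch.nn as nn

class GaussianMLP(nn.Module):
    def __init__(self, d_in=1, width=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, width), nn.Tanh(),
                                  nn.Linear(width, width), nn.Tanh())
        self.mean = nn.Linear(width, 1)
        self.log_var = nn.Linear(width, 1)   # heteroscedastic (aleatoric) noise head

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def nll(mean, log_var, y):
    # Gaussian negative log-likelihood used to train each ensemble member
    return (0.5 * (log_var + (y - mean) ** 2 / log_var.exp())).mean()

# Toy data: noisy observations of a 1D function
x = torch.linspace(-3, 3, 200).unsqueeze(-1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

ensemble = [GaussianMLP() for _ in range(5)]     # independent random initializations
for net in ensemble:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        m, lv = net(x)
        nll(m, lv, y).backward()
        opt.step()

with torch.no_grad():
    x_test = torch.linspace(-5, 5, 100).unsqueeze(-1)   # includes extrapolation region
    means = torch.stack([net(x_test)[0] for net in ensemble])        # (5, 100, 1)
    variances = torch.stack([net(x_test)[1].exp() for net in ensemble])
    epistemic = means.var(dim=0)        # disagreement between members
    aleatoric = variances.mean(dim=0)   # average predicted data noise
    total = epistemic + aleatoric
```

Outside the training data (here, beyond |x| = 3) the epistemic term grows because the members disagree, which is the signal an acquisition strategy can use to decide where new data is most valuable.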